Federated learning faces a significant challenge of model overfitting due to the lack of data and statistical diversity among clients. To address this challenge, this paper proposes a novel personalized federated learning method via Bayesian variational inference, named pFedBayes. To alleviate overfitting, weight uncertainty is introduced into the neural networks of both the clients and the server. To achieve personalization, each client updates its local distribution parameters by balancing the construction error over its private data against the KL divergence from the server's global distribution. Theoretical analysis gives an upper bound on the averaged generalization error and shows that the convergence rate of the generalization error is minimax optimal up to a logarithmic factor. Experiments demonstrate that the proposed method outperforms other advanced personalization methods on personalized models; e.g., pFedBayes outperforms other SOTA algorithms by 1.25%, 0.42%, and 11.71% on MNIST, FMNIST, and CIFAR-10, respectively, under non-i.i.d. limited data.
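To make the balancing step concrete, here is a minimal sketch (not the authors' released code) of a per-client objective of the kind the abstract describes. It assumes Gaussian weight posteriors and a trade-off coefficient `zeta`; `nll_fn` and the parameterization are illustrative assumptions.

```python
# Sketch of a pFedBayes-style client objective: data-fit error on private
# data plus KL divergence between the client's local weight distribution
# and the server's global distribution. Gaussian form and `zeta` assumed.
import torch
import torch.distributions as dist

def client_objective(local_mu, local_rho, global_mu, global_rho,
                     nll_fn, private_data, zeta=1.0, n_samples=1):
    """ELBO-style loss: construction error + zeta * KL(local || global)."""
    q_local = dist.Normal(local_mu, torch.nn.functional.softplus(local_rho))
    p_global = dist.Normal(global_mu, torch.nn.functional.softplus(global_rho))

    # Monte-Carlo estimate of the expected negative log-likelihood
    # (the "construction error") under sampled network weights.
    nll = torch.stack([nll_fn(q_local.rsample(), private_data)
                       for _ in range(n_samples)]).mean()

    # KL term keeping the personalized model close to the global one.
    kl = dist.kl_divergence(q_local, p_global).sum()
    return nll + zeta * kl
```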
Session-based recommender systems (SBRSs) capture the dependencies among the items in a session to recommend the next item. In recent years, graph neural network (GNN) based SBRSs have become the mainstream, benefiting from the superiority of GNNs in modeling complex dependencies. Most GNN-based SBRSs rest on a strong assumption of adjacent dependency: any two adjacent items in a session must be dependent. However, we argue that adjacency does not necessarily indicate dependency, owing to the uncertainty and complexity of user behaviors, so this assumption does not always hold in real-world recommendation scenarios. It therefore easily leads to two deficiencies: (1) false dependencies occur in a session, because there are items that are adjacent but not really dependent; and (2) true dependencies are missed in a session, because there are items that are non-adjacent but actually dependent. These deficiencies significantly affect item representation learning and thus degrade the downstream recommendation performance. To address them, we propose a novel review-refined inter-item graph neural network (RI-GNN), which utilizes topic information extracted from item reviews to refine the dependencies between items. Experiments on two public real-world datasets demonstrate that RI-GNN outperforms SOTA methods.
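As a rough illustration of the adjacency-versus-dependency point, the sketch below refines a session graph's edges with review-topic similarity. The cosine-similarity scoring and the two thresholds are our assumptions for illustration, not RI-GNN's actual construction.

```python
# Illustrative edge refinement: drop adjacent pairs with low review-topic
# similarity (false dependencies) and add non-adjacent pairs with high
# similarity (missed true dependencies). Thresholds are assumptions.
import numpy as np

def refine_session_edges(session, topic_vecs, keep_thresh=0.3, add_thresh=0.7):
    """session: list of item ids in click order.
    topic_vecs: dict item_id -> topic distribution from its reviews."""
    def sim(a, b):
        va, vb = topic_vecs[a], topic_vecs[b]
        return float(np.dot(va, vb) / (np.linalg.norm(va) * np.linalg.norm(vb)))

    edges = set()
    for i in range(len(session) - 1):          # adjacent pairs: filter
        if sim(session[i], session[i + 1]) >= keep_thresh:
            edges.add((session[i], session[i + 1]))
    for i in range(len(session)):              # non-adjacent pairs: add
        for j in range(i + 2, len(session)):
            if sim(session[i], session[j]) >= add_thresh:
                edges.add((session[i], session[j]))
    return edges
```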
Although well established in general reinforcement learning (RL), value-based methods are rarely explored in constrained RL (CRL) due to their inability to find policies that can randomize among multiple actions. To apply value-based methods to CRL, recent game-theoretic approaches employ a mixed policy that randomizes among a set of carefully generated policies to converge to the desired constraint-satisfying policy. However, these approaches require storing a large set of policies, which is not policy-efficient and may incur prohibitive memory costs in constrained deep RL. To address this problem, we propose an alternative approach. Our approach first reformulates CRL as an equivalent distance optimization problem. With a specially designed linear optimization oracle, we derive a meta-algorithm that solves the problem using any off-the-shelf RL algorithm and any conditional gradient (CG) type algorithm as subroutines. We then propose a new variant of the CG-type algorithm that generalizes the minimum norm point (MNP) method. The proposed method matches the convergence rate of existing game-theoretic approaches and achieves worst-case optimal policy efficiency. Experiments on a navigation task show that our method reduces the memory cost by an order of magnitude while achieving better performance, demonstrating both its effectiveness and efficiency.
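For intuition, here is a schematic sketch of a conditional-gradient (Frank-Wolfe style) meta-loop of the kind the abstract outlines, where an off-the-shelf RL algorithm serves as the linear optimization oracle. `rl_oracle`, `grad_fn`, and `measure_fn` are hypothetical stand-ins for the paper's components.

```python
# Schematic CG meta-algorithm: CRL viewed as minimizing a distance
# objective over expected measurement vectors; each iteration calls an
# RL oracle on the linearized objective. All callables are stand-ins.
def cg_meta_algorithm(init_policy, rl_oracle, grad_fn, measure_fn, num_iters):
    policies, weights = [init_policy], [1.0]
    x = measure_fn(init_policy)         # mixture's expected measurement vector
    for t in range(num_iters):
        g = grad_fn(x)                  # gradient of the distance objective at x
        pi = rl_oracle(-g)              # LO oracle: solve an RL problem whose
                                        # reward is the negated gradient
        v = measure_fn(pi)
        gamma = 2.0 / (t + 3)           # standard CG step size
        x = (1.0 - gamma) * x + gamma * v
        weights = [w * (1.0 - gamma) for w in weights] + [gamma]
        policies.append(pi)
    # Vanilla CG keeps every generated policy; the paper's MNP-style variant
    # targets exactly this memory growth by keeping a small active set.
    return policies, weights            # the mixed policy
```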
Traditional supervised learning mostly works on individual tasks and requires training on a large set of task-specific examples. This paradigm seriously hinders the development of task generalization since preparing a task-specific example set is costly. To build a system that can quickly and easily generalize to new tasks, task instructions have recently been adopted as an emerging form of supervision. These instructions give the model the definition of the task and allow it to output the appropriate answer based on the instructions and inputs. However, task instructions are often expressed in different forms, which can be interpreted along two threads: first, some instructions are short sentences and are pretrained language model (PLM) oriented, such as prompts, while other instructions are paragraphs and are human-oriented, such as those in Amazon MTurk; second, different end-users are likely to explain the same task with instructions of different textual expressions. A robust system for task generalization should be able to handle any new task regardless of the variability of instructions. However, system robustness in dealing with instruction-driven task generalization is still unexplored. This work investigates system robustness when the instructions of new tasks are (i) maliciously manipulated, (ii) paraphrased, or (iii) given at different levels of conciseness. To our knowledge, this is the first work that systematically studies how robust a PLM is when it is supervised by instructions with different factors of variability.
Many NLP tasks can be regarded as a selection problem over a set of options, such as classification tasks, multi-choice question answering, etc. Textual entailment (TE) has been shown to be the state-of-the-art (SOTA) approach to these selection problems. TE treats input texts as premises (P) and options as hypotheses (H), then handles the selection problem by modeling (P, H) pairwise. This has two limitations: first, pairwise modeling is unaware of the other options, which is less intuitive since humans often determine the best option by comparing competing candidates; second, the inference process of pairwise TE is time-consuming, especially when the option space is large. To deal with these two issues, this work first proposes a contextualized TE model (Context-TE) that appends the other k options as the context of the current (P, H) modeling. Context-TE is able to learn more reliable decisions for H since it considers various contexts. Second, we speed up Context-TE by proposing Parallel-TE, which learns the decisions for multiple options simultaneously. Parallel-TE significantly improves the inference speed while keeping performance comparable to Context-TE. Our methods are evaluated on three tasks (ultra-fine entity typing, intent detection, and multi-choice QA) that are typical selection problems with different sizes of options. Experiments show our models set new SOTA performance; in particular, Parallel-TE is k times faster than pairwise TE in inference. Our code is publicly available at https://github.com/jiangshdd/LearningToSelect.
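To illustrate the two input constructions (this is not the released code at the repository above), a minimal sketch follows; the `[SEP]`/`[CTX]`/`[OPT]` separator tokens are assumptions.

```python
# Illustrative input construction for Context-TE vs. Parallel-TE.
def context_te_input(premise, hypothesis, other_options):
    """One (P, H) pair, with the other k options appended as context."""
    context = " [OPT] ".join(other_options)
    return f"{premise} [SEP] {hypothesis} [CTX] {context}"

def parallel_te_input(premise, options):
    """All options in one sequence so their decisions are learned jointly."""
    return f"{premise} [SEP] " + " [OPT] ".join(options)

# Pairwise TE would run the model once per option; Parallel-TE runs it
# once per premise, which is where the ~k-times inference speedup comes from.
options = ["politician", "athlete", "musician"]
print(context_te_input("Obama gave a speech.", options[0], options[1:]))
print(parallel_te_input("Obama gave a speech.", options))
```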
In natural language processing (NLP), the context of a word or sentence plays an essential role. Contextual information such as the semantic representation of a passage or historical dialogue forms an essential part of a conversation and of a precise understanding of the present phrase or sentence. However, despite their great success in modeling sequence alignment, standard attention mechanisms typically generate weights using only the query and key while ignoring the context, forming a Bi-Attention framework. This Bi-Attention mechanism does not explicitly model the interactions among the contexts, queries, and keys of target sequences, missing important contextual information and resulting in poor attention performance. Accordingly, we propose a novel and general triple-attention (Tri-Attention) framework that expands the standard Bi-Attention mechanism and explicitly interacts query, key, and context by incorporating context as a third dimension in calculating relevance scores. Four variants of Tri-Attention are generated by expanding the two-dimensional vector-based additive, dot-product, scaled dot-product, and bilinear operations in Bi-Attention to tensor operations for Tri-Attention. Extensive experiments on three NLP tasks demonstrate that Tri-Attention outperforms about 30 state-of-the-art non-attention, standard Bi-Attention, and contextual Bi-Attention approaches, as well as pretrained neural language models.
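As one plausible instantiation, the sketch below generalizes scaled dot-product scoring to a trilinear form that couples query, key, and context; the exact parameterization and scaling of the paper's four variants may differ.

```python
# Trilinear generalization of scaled dot-product attention: the relevance
# score contracts query, key, and a per-key context vector over the feature
# dimension. The d**1.5 scaling is an assumed analogue of sqrt(d).
import torch

def tri_attention_scaled_dot(Q, K, C, d):
    """Q: (n, d) queries; K: (m, d) keys; C: (m, d) context per key position.
    Returns an (n, m) attention matrix coupling query, key, and context."""
    scores = torch.einsum("nd,md,md->nm", Q, K, C) / (d ** 1.5)
    return torch.softmax(scores, dim=-1)

Q, K, C = torch.randn(4, 8), torch.randn(6, 8), torch.randn(6, 8)
print(tri_attention_scaled_dot(Q, K, C, d=8).shape)  # torch.Size([4, 6])
```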
The dialogue policy module is an essential part of task-completion dialogue systems. Recently, increasing interest has focused on dialogue policies based on reinforcement learning (RL). Their favorable performance and wise action decisions rely on accurate estimation of action values. The overestimation problem is a widely known issue in RL, since its estimate of the maximum action value is larger than the ground truth, which leads to an unstable learning process and a suboptimal policy. This problem is detrimental to RL-based dialogue policy learning. To mitigate it, this paper proposes a dynamic partial average estimator (DPAV) of the ground-truth maximum action value. DPAV calculates the partial average between the predicted maximum action value and the minimum action value, where the weights are dynamically adaptive and problem-dependent. We incorporate DPAV into the dialogue policy and show that our method can achieve better or comparable results on three dialogue datasets of different domains with a lower computational load. In addition, we theoretically prove convergence and derive the upper and lower bounds of the bias in comparison with other methods.
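A minimal sketch of the DPAV-style target, assuming a fixed blending weight `beta` for illustration (in the paper the weight is dynamically adaptive and problem-dependent):

```python
# DPAV-style TD target: blend the predicted maximum and minimum action
# values instead of using the max alone. `beta` fixed here for illustration.
import numpy as np

def dpav_target(q_next, reward, gamma=0.99, beta=0.8):
    """q_next: Q-values for all actions in the next state.
    Returns a TD target built on the partial average of max and min Q."""
    partial_avg = beta * np.max(q_next) + (1.0 - beta) * np.min(q_next)
    return reward + gamma * partial_avg

# Compared with the plain max (beta = 1), pulling the target toward the
# minimum counteracts the overestimation bias the abstract discusses.
print(dpav_target(np.array([1.0, 2.5, 0.3]), reward=0.5))
```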
A session-based news recommender system recommends the next news item to a user by modeling her/his potential interests embedded in the sequence of news items she/he has read or clicked within a session. In general, a user's interests are diverse: within a session there exist multiple interests corresponding to different types of news, e.g., news on distinct topics. Modeling such multiple interests is critical for precise news recommendation. However, most existing methods typically overlook this important characteristic and thus fail to distinguish and model a user's multiple latent interests, hindering accurate recommendation of the next news item. Therefore, this paper proposes a multi-interest news sequence (MINS) model for news recommendation. In MINS, a self-attention based news encoder is devised to learn an informative embedding for each news item, and a novel parallel interest network is then designed to extract the multiple latent interests embedded in the news sequence, in preparation for the subsequent next-news recommendation. Experimental results on a real-world dataset demonstrate that our model can achieve better performance than the state-of-the-art models.
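The sketch below shows one way the two named components could fit together, with illustrative dimensions and head counts; it is a schematic reading of the abstract, not the authors' architecture.

```python
# Schematic MINS-style model: a self-attention news encoder produces one
# embedding per news item; parallel branches each extract one latent
# interest from the session's news sequence. Sizes are illustrative.
import torch
import torch.nn as nn

class MINSSketch(nn.Module):
    def __init__(self, d_model=64, n_heads=4, n_interests=3):
        super().__init__()
        self.news_encoder = nn.MultiheadAttention(d_model, n_heads,
                                                  batch_first=True)
        self.interest_nets = nn.ModuleList(
            [nn.GRU(d_model, d_model, batch_first=True)
             for _ in range(n_interests)])

    def forward(self, news_tokens):            # (batch, seq, n_tokens, d)
        b, s, t, d = news_tokens.shape
        flat = news_tokens.view(b * s, t, d)
        enc, _ = self.news_encoder(flat, flat, flat)
        news_emb = enc.mean(dim=1).view(b, s, d)  # one embedding per item
        # each parallel branch yields one interest vector for the session
        interests = [net(news_emb)[1].squeeze(0) for net in self.interest_nets]
        return torch.stack(interests, dim=1)      # (batch, n_interests, d)

out = MINSSketch()(torch.randn(2, 5, 7, 64))
print(out.shape)  # torch.Size([2, 3, 64])
```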
Understanding an article requires understanding its constituent events. However, the context in which an event is mentioned often lacks details about it. Where, then, beyond the context, can we obtain more knowledge about this particular event? This work defines Event Linking, a new natural-language-understanding task at the event level. Event Linking tries to link an event mention, e.g., one appearing in a news article, to the most appropriate Wikipedia page. That page is expected to provide rich knowledge about what the event refers to. To standardize research on this new problem, our contributions are threefold. First, this is the first work in the community to formally define the Event Linking task. Second, we collect a dataset for this new task: we first gather the training set automatically from Wikipedia, and then create two evaluation sets, one from the Wikipedia domain to report in-domain performance, and the other from the real-world news domain to test out-of-domain performance. Third, we propose EveLINK, the first-ever Event Linking approach. Overall, Event Linking is a considerably challenging task that needs more effort from the community. Data and code are available here: https://github.com/cogcomp/event-linking.
Novel view synthesis of static scenes has achieved remarkable progress in producing photo-realistic results. However, key challenges remain for immersive rendering of dynamic content. For example, one of the seminal image-based rendering frameworks, the multi-plane image (MPI), produces high novel-view synthesis quality for static scenes but has difficulty modeling dynamic parts. In addition, modeling dynamic variations with MPIs may require huge storage space and long inference time, which hinders their application in real-time scenarios. In this paper, we propose a novel Temporal-MPI representation that can encode the rich 3D and dynamic variation information of an entire video as compact temporal bases. Novel views at arbitrary time instances can then be rendered in real time with high visual quality, thanks to the highly compact and expressive latent bases and the jointly learned coefficients. We show that, at comparable memory consumption, our proposed Temporal-MPI framework is able to generate a time-instance MPI in only 0.002 seconds, which is up to 3000 times faster, with 3 dB higher average view-synthesis PSNR, compared with other state-of-the-art dynamic-scene modeling frameworks.
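One plausible reading of the representation, written as an equation (the factorization below is our sketch, not necessarily the paper's exact formulation):

```latex
% Temporal-MPI, sketched: the MPI at time t is reconstructed from K shared
% basis components B_k and time-dependent coefficients c_k(t).
\[
  \mathrm{MPI}(t) \;=\; \sum_{k=1}^{K} c_k(t)\, B_k
\]
% Storage is then K basis volumes plus per-frame coefficients, rather than
% a full MPI per frame; rendering at time t needs only this weighted sum.
```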